
    Instant restore after a media failure

    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volumes. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further: it permits read/write access to any data on a device undergoing restore -- even data not yet restored -- by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is implemented incrementally on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut to less than a second and that the overhead the technique imposes on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
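    The on-demand idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `InstantRestoreDevice`, its backup-image map, and the per-segment log-replay table are invented names, and a real single-pass restore works over sorted log archives rather than in-memory dicts.

```python
class InstantRestoreDevice:
    """Toy model of instant restore: reads on a failed device trigger
    restoration of just the touched segment, so transactions resume
    long before the whole device has been rebuilt."""

    def __init__(self, backup, log_replay):
        self.backup = backup          # segment_id -> backup image (list of values)
        self.log_replay = log_replay  # segment_id -> [(offset, new_value), ...]
        self.restored = {}            # segments already brought up to date

    def read(self, segment_id):
        if segment_id not in self.restored:
            # Restore only this segment: load its backup image and
            # replay the log records that touch it.
            image = list(self.backup[segment_id])
            for offset, value in self.log_replay.get(segment_id, []):
                image[offset] = value
            self.restored[segment_id] = image
        return self.restored[segment_id]
```

A segment is paid for only when it is first touched; untouched segments can be restored by a background pass later.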

    Sampling-Based Query Re-Optimization

    Despite decades of work, query optimizers still make mistakes on "difficult" queries because of bad cardinality estimates, often due to the interaction of multiple predicates and correlations in the data. In this paper, we propose a low-cost post-processing step that can take a plan produced by the optimizer, detect when it is likely to have made such a mistake, and take steps to fix it. Specifically, our solution is a sampling-based iterative procedure that requires almost no changes to the original query optimizer or query evaluation mechanism of the system. We show that this indeed imposes low overhead and catches cases where three widely used optimizers (PostgreSQL and two commercial systems) make large errors.
    Comment: This is the extended version of a paper with the same title and authors that appears in the Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2016).
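    The detection step can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the function names, the row-list table representation, and the factor-of-ten divergence threshold are all assumptions made for the example.

```python
import random

def sample_cardinality(table, predicate, sample_size=100, seed=0):
    """Estimate a predicate's output cardinality by scanning a random
    sample of the table and scaling the observed hit rate."""
    rng = random.Random(seed)
    sample = rng.sample(table, min(sample_size, len(table)))
    hit_rate = sum(1 for row in sample if predicate(row)) / len(sample)
    return hit_rate * len(table)

def needs_reoptimization(optimizer_estimate, sample_estimate, factor=10.0):
    """Flag a plan whose optimizer estimate diverges from the sampled
    estimate by more than `factor` in either direction."""
    lo, hi = sample_estimate / factor, sample_estimate * factor
    return not (lo <= optimizer_estimate <= hi)
```

When a plan is flagged, the sampled cardinalities can be fed back to the optimizer to produce a corrected plan, and the check repeated until it converges.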

    Main Memory Implementations for Binary Grouping

    An increasing number of applications depend on efficient storage and analysis features for XML data. Hence, query optimization and efficient evaluation techniques for the emerging XQuery standard are becoming more and more important. Many XQuery queries require nested expressions, and unnesting them often introduces binary grouping. We introduce several algorithms implementing binary grouping and analyze their time and space complexity. Experiments demonstrate their performance.
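    One of the standard implementation strategies for a binary grouping operator is hash-based. The sketch below is illustrative (the function name and the key-extractor interface are invented, not taken from the paper): one hash pass over the inner input followed by a probe per outer item gives O(|outer| + |inner|) time, versus the quadratic nested-loop formulation of the same operator.

```python
from collections import defaultdict

def hash_binary_grouping(outer, inner, key_outer, key_inner):
    """For each item of `outer`, collect the matching `inner` items
    (those with an equal key) into a group; outer items with no match
    keep an empty group, as binary grouping requires."""
    buckets = defaultdict(list)
    for item in inner:
        buckets[key_inner(item)].append(item)   # one pass over inner
    return [(o, buckets.get(key_outer(o), [])) for o in outer]
```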

    Bloch oscillations of Bose-Einstein condensates: Quantum counterpart of dynamical instability

    We study the Bloch dynamics of a quasi-one-dimensional Bose-Einstein condensate of cold atoms in a tilted optical lattice modeled by a Hamiltonian of Bose-Hubbard type: the corresponding mean-field system, described by a discrete nonlinear Schrödinger equation, can show a dynamical (or modulation) instability due to chaotic dynamics and equipartition over the quasimomentum modes. It is shown that these phenomena are related to a depletion of the Floquet-Bogoliubov states and a decoherence of the condensate in the many-particle description. Three different types of dynamics are distinguished: (i) decaying oscillations in the region of dynamical instability, and (ii) persisting Bloch oscillations or (iii) periodic decay and revivals in the region of stability.
    Comment: 12 pages, 14 figures.
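    The mean-field model referred to above is, in a standard form (the symbols are conventional choices, not taken from the paper: $J$ the tunneling matrix element, $Fd$ the tilt per lattice site, $g$ the effective mean-field nonlinearity):

```latex
i\hbar\,\frac{\mathrm{d}\psi_n}{\mathrm{d}t}
  = -J\left(\psi_{n+1} + \psi_{n-1}\right) + F d\, n\, \psi_n + g\,|\psi_n|^{2}\,\psi_n
```

The dynamical (modulation) instability discussed in the abstract originates from the nonlinear term $g|\psi_n|^2\psi_n$.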

    Studies of the production and decay of heavy baryons with the ALEPH experiment at the LEP storage ring


    Adaptive indexing in modern database kernels

    Physical design represents one of the hardest problems for database management systems. Without proper tuning, systems cannot achieve good performance. Offline indexing creates indexes a priori, assuming good workload knowledge and idle time. More recently, online indexing monitors workload trends and creates or drops indexes online. Adaptive indexing takes another step towards completely automating the tuning process of a database system by enabling incremental and partial online indexing. The main idea is that physical design changes continuously, adaptively, partially, incrementally and on demand while processing queries as part of the execution operators. As such, it brings a plethora of opportunities for rethinking and improving every single corner of database system design. We will analyze the indexing space between offline, online and adaptive indexing through several state-of-the-art indexing techniques, e.g., what-if analysis and soft indexes. We will discuss in detail adaptive indexing techniques such as database cracking, adaptive merging, sideways cracking and various hybrids that try to balance the online tuning overhead with the convergence speed to optimal performance. In addition, we will discuss how various aspects of modern techniques for database architectures, such as vectorization, bulk processing, column-store execution and storage, affect adaptive indexing. Finally, we will discuss several open research topics towards fully autonomous database kernels.
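    Database cracking, the first of the adaptive techniques listed above, can be illustrated with a toy in-memory column. This sketch is a simplification with invented names (`CrackerColumn`, `range_query`); real cracking operates on a copy of the column with attached row identifiers, but the core idea is the same: each range query physically partitions the column around its bounds, so the data self-organizes exactly where queries touch it.

```python
import bisect

class CrackerColumn:
    """Each range query cracks the column at its bounds; later queries
    over the same region find the data already contiguous and skip work."""

    def __init__(self, values):
        self.values = list(values)
        # Crack index: sorted (pivot, position) pairs; within each piece,
        # values before `position` are < pivot, the rest are >= pivot.
        self.cracks = [(float("-inf"), 0), (float("inf"), len(values))]

    def _partition(self, lo, hi, pivot):
        # In-place partition of values[lo:hi]: < pivot left, >= pivot right.
        v, i, j = self.values, lo, hi - 1
        while i <= j:
            if v[i] < pivot:
                i += 1
            else:
                v[i], v[j] = v[j], v[i]
                j -= 1
        return i  # first index holding a value >= pivot

    def _crack(self, pivot):
        keys = [p for p, _ in self.cracks]
        i = bisect.bisect_left(keys, pivot)
        if i < len(keys) and keys[i] == pivot:
            return self.cracks[i][1]          # already cracked here
        lo, hi = self.cracks[i - 1][1], self.cracks[i][1]
        pos = self._partition(lo, hi, pivot)  # crack only one piece
        self.cracks.insert(i, (pivot, pos))
        return pos

    def range_query(self, low, high):
        # Cracking both bounds makes [low, high) physically contiguous.
        a = self._crack(low)
        b = self._crack(high)
        return self.values[a:b]
```

The first query over a region pays the partitioning cost; repeated queries over the same bounds reduce to two index lookups and a slice.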

    Accuracy of Combined Forecasts for the 2012 Presidential Election: The PollyVote

    We review the performance of the PollyVote, which combined forecasts from polls, prediction markets, experts’ judgment, political economy models, and index models to predict the two-party popular vote in the 2012 US presidential election. Throughout the election year the PollyVote provided highly accurate forecasts, outperforming each of its component methods, as well as the forecasts from FiveThirtyEight.com. Gains in accuracy were particularly large early in the campaign, when uncertainty about the election outcome is typically high. The results confirm prior research showing that combining is one of the most effective approaches to generating accurate forecasts.

    Event Stream Processing with Multiple Threads

    Current runtime verification tools seldom make use of multi-threading to speed up the evaluation of a property on a large event trace. In this paper, we present an extension to the BeepBeep 3 event stream engine that allows the use of multiple threads during the evaluation of a query. Various parallelization strategies are presented and described on simple examples. The implementation of these strategies is then evaluated empirically on a sample of problems. Compared to the previous, single-threaded version of the BeepBeep engine, the allocation of just a few threads to specific portions of a query provides dramatic improvement in terms of running time.
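    One of the simplest parallelization strategies for trace evaluation is to split the trace into contiguous slices, evaluate each slice on its own thread, and merge the partial results. The sketch below is a generic Python illustration, not BeepBeep's Java API, and it only applies to properties whose per-slice results combine associatively (here, a simple event count).

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_count(trace, predicate, n_threads=4):
    """Count events satisfying `predicate` by splitting the trace into
    contiguous slices, one per worker, and summing the partial counts."""
    chunk = max(1, len(trace) // n_threads)
    slices = [trace[i:i + chunk] for i in range(0, len(trace), chunk)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        partials = pool.map(lambda s: sum(1 for e in s if predicate(e)), slices)
    return sum(partials)
```

Stateful properties (e.g., temporal patterns spanning slice boundaries) need more careful strategies, which is exactly where the paper's per-portion thread allocation comes in.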

    The PollyVote Forecast for the 2016 American Presidential Election

    The PollyVote applies a century-old principle of combining different evidence-based methods for forecasting the outcome of American presidential elections. In this article, we discuss the principles followed in constructing the PollyVote formula, summarize its components, review the accuracy of its previous forecasts, and make a prediction for this year's presidential election.

    Towards a Landau-Zener formula for an interacting Bose-Einstein condensate

    We consider the Landau-Zener problem for a Bose-Einstein condensate in a linearly varying two-level system, for the full many-particle system as well as in the mean-field approximation. The many-particle problem can be solved approximately within an independent-crossings approximation, which yields an explicit Landau-Zener formula.
    Comment: RevTeX, 8 pages, 9 figures.
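    For reference, the textbook single-particle Landau-Zener result that the many-particle formula generalizes reads (standard notation, not taken from the paper: $\Delta$ is the coupling between the two levels and $\alpha$ the sweep rate of their energy difference in $H(t) = \tfrac{1}{2}(\alpha t\,\sigma_z + \Delta\,\sigma_x)$):

```latex
P_{\mathrm{LZ}} = \exp\!\left(-\frac{\pi\,\Delta^{2}}{2\hbar\,\alpha}\right)
```

$P_{\mathrm{LZ}}$ is the probability of a diabatic transition, i.e., of the system failing to follow the adiabatic level through the avoided crossing.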